This AI deepfake is next level: Control expressions & motion

  • Published: Dec 18, 2024

Comments •

  • @theAIsearch
    @theAIsearch  5 months ago +25

    TurboType is a Chrome extension that helps you type faster with keyboard shortcuts. Try it today and start saving time. They have a free forever plan!
    www.turbotype.app/

    • @nartrab1
      @nartrab1 5 months ago

      Thanks man, amazing video. I'll try this hotkey manager.

    • @KryzysX
      @KryzysX 5 months ago +4

      Lowkey one of the best AI YouTubers.

    • @laylasmart
      @laylasmart 5 months ago

      Free and open source? No such thing. If it were, you wouldn't need to install it; you'd just double-click and it would work.

    • @laylasmart
      @laylasmart 5 months ago

      @@minhuang8848
      Harvesting info is the goal today. That's why Microsoft made Copilot AI and Recall: spyware built into the operating system.

    • @arcanjl
      @arcanjl 5 months ago

      Will it work on iOS devices?

  • @FriscoFatseas
    @FriscoFatseas 5 months ago +34

    I normally don't mess with this stuff until there's a proper interface, but I'm super excited for this to be working open source.

    • @SVAFnemesis
      @SVAFnemesis 5 months ago +8

      You're right. For reasons I can't fathom, very few AI tools are pushed to the point of getting a user-friendly client like Midjourney. Even a programmer like me struggles to pull a repository from git and build it myself; I don't think any regular artist out there can do AI without at least a client.

    • @English_Lessons_Pre-Int_Interm
      @English_Lessons_Pre-Int_Interm 5 months ago

      @@SVAFnemesis what, you don't like typing in Discord? heretic.

    • @SVAFnemesis
      @SVAFnemesis 5 months ago +1

      @@English_Lessons_Pre-Int_Interm Can you please carefully read and understand my comment?

    • @Techduturfu
      @Techduturfu 4 months ago +1

      @@SVAFnemesis I think they were being sarcastic.

  • @filipr3336
    @filipr3336 5 months ago +83

    The first thing I think of is a potential VR application. If this ran in real time, your whole expression could be projected onto your VR avatar.

    • @iankrasnow5383
      @iankrasnow5383 5 months ago

      It looks like it still takes some processing time to run the software, so at least currently you couldn't animate an avatar in real time on a local recording. So for V-tubers, at least a few months off. But if it's run by a cloud service, maybe they could make it work.

    • @Metapharsical
      @Metapharsical 5 months ago +2

      It's not new; Meta has already demoed this type of real-time avatar. Zuck showed it off on Lex Fridman's podcast.
      It's real-time and very high quality, it just requires some pre-processing of a full face scan.

    • @jeff_clayton
      @jeff_clayton 5 months ago

      @@iankrasnow5383 with the speed of tech innovations in the last few years, if they decide to work toward this it won't be that long

    • @artificiyal
      @artificiyal 5 months ago +2

      video calling

    • @tusharbhatnagar8143
      @tusharbhatnagar8143 5 months ago

      @@iankrasnow5383 A lot of time. Still not a consumer-friendly implementation. Might take some more time to be real-time ready. Fingers crossed.

  • @gunnarswank
    @gunnarswank 5 months ago +70

    How soon can I get a Hogwarts painting of my dead grandmother to tell me to wash my hands every hour?

    • @biocykle
      @biocykle 5 months ago +12

      Now, if you want

    • @xuimod
      @xuimod 4 months ago +1

      Now, if you have the time and technical expertise, or enough money to pay someone else to do it.

    • @sertocd
      @sertocd 4 months ago

      Very quickly. Just get Pinokio and install Live Portrait; it's only 3 clicks. Also, Fooocus and Stable Cascade are great for Midjourney-quality AI stuff.

    • @Entropydemic
      @Entropydemic 4 months ago

      You can fast track this with Runway.

  • @elyakimlev
    @elyakimlev 5 months ago +15

    Wow, the option to animate a face in a source video has great potential. I can already see people creating scenes of people interacting with each other with Runway Gen-3 or another video generator, then editing the video so that the people in the scene actually talk!
    We're one step closer to creating movie scenes.

    • @theAIsearch
      @theAIsearch  5 months ago +2

      exactly!

    • @elyakimlev
      @elyakimlev 5 months ago

      @@user-cz9bl6jp8b I don't know. I never tried it. I was only commenting on what I saw in the video.

    • @jaywv1981
      @jaywv1981 5 months ago

      @@user-cz9bl6jp8b I'd like to know this too.

  • @KingLeoBull
    @KingLeoBull 17 days ago

    I'm always too lazy to comment or like, and subscribing is almost impossible, but I did it all today on your video; just excellent. Thinking from a programmer's point of view, your video taught me a lot today. Thanks mate, from India.

  • @pepsico815
    @pepsico815 5 months ago +160

    What happens if the source sticks out a tongue?

    • @mikezooper
      @mikezooper 5 months ago +126

      The entire internet crashes.

    • @42ndMoose
      @42ndMoose 5 months ago +15

      There are already artifacts with the teeth, where they stay static the way the hair does against the background; I'd imagine the same will happen with the tongue.
      Huge step up for open source nonetheless!

    • @vaolin1703
      @vaolin1703 5 months ago +23

      Harambe is resurrected

    • @drmarioschannel
      @drmarioschannel 5 months ago +3

      Doesn't work.

    • @lol_09.
      @lol_09. 5 months ago +41

      What is bro planning to do

  • @Ferruccio_Guicciardi
    @Ferruccio_Guicciardi 3 months ago +1

    Amazing! Thanks for sharing Live Portrait! Thanks for the TurboType tool too! Amazing and practical!

  • @rainy.aesthetics
    @rainy.aesthetics 5 months ago +25

    YOU ARE REALLY GIVING ME ALL THESE THINGS whenever I NEEDED THIS TO Make my animation!!!!

    • @monday304
      @monday304 5 months ago +8

      Nice! Can I see your animation when you're finished?

    • @theAIsearch
      @theAIsearch  5 months ago +8

      Good luck!

  • @ECHOPULSENEWS
    @ECHOPULSENEWS 5 months ago +2

    This is probably the best instructional video I have seen lately, going through the steps in great detail. Thank you for that!! Once installed, it runs smoothly.

    • @theAIsearch
      @theAIsearch  5 months ago

      You're very welcome!

    • @abhishekpatwal8576
      @abhishekpatwal8576 5 months ago +1

      Were you able to run it on a group photo to animate multiple faces? I was unable to.

    • @ECHOPULSENEWS
      @ECHOPULSENEWS 5 months ago

      @@abhishekpatwal8576 You can uncheck "do crop" and it does try, but the result isn't there yet.

  • @VintageForYou
    @VintageForYou 5 months ago +5

    This is insanely fantastic for controlling expressions. Keep making your great videos. 💯 Top notch. 😁

  • @bmoviecreature1507
    @bmoviecreature1507 5 months ago +2

    I always appreciate that you show every single step of the install. It's very helpful for people who aren't familiar with code.

  • @Dina_tankar_mina_ord
    @Dina_tankar_mina_ord 5 months ago +16

    Create a source video from the movie "The Mask", when Jim's character gets freaky with his eyes and mouth.

  • @joewhitfield6316
    @joewhitfield6316 3 months ago

    This is FANTASTIC! It's going to take a minute to "git" everything (the dependencies) installed and working properly (macOS Monterey), but this is open source, so I have nothing to complain about. I'll do whatever it takes and however long it takes to nail this one. Thanks for the tutorial!!

  • @juggernautknight2749
    @juggernautknight2749 5 months ago +28

    Absolutely incredible!

  • @marcus8451
    @marcus8451 5 months ago +1

    I need to recant and say that everything worked out in the end. It is necessary to install all the components first, and only at the end of it all can you install the platform. Thanks for the video.

    • @theAIsearch
      @theAIsearch  5 months ago +1

      glad you got it to work!

    • @UserGram-1
      @UserGram-1 5 months ago

      Which Python version did you use?

    • @marcus8451
      @marcus8451 5 months ago

      @@UserGram-1 What I did was follow the tutorial in this video and after four or five failed attempts it ended up working.

  • @stylezmorales
    @stylezmorales 5 months ago +13

    Working on a cartoon about a mischievous young girl named Yumi. I've been using AI since the beginning and finally found a way to make her consistently with apps that effectively use character-reference techniques; I've even trained a model on my character. Being a creative partner with Pika, Leonardo AI, and finally Runway ML, I'm able to create a ton of content, but I will need to add the character animation. While I actually turned my AI character into a fully rigged MetaHuman, it's nice to know that if and when I need a quick shot and don't have the time to set it up in Unreal, I can quickly generate my character in the scene; then I or my niece, who will do most of the performance stuff for Yumi, can act out and voice her, and I can use that footage and audio to animate the clip. This is an amazing time to be in. As someone who uses AI as a tool, I can see the several use cases for stuff like this, and it's going to make life easier for me, as I am a studio of one and have zero budget to make most of my stuff. Using a combination of free tools and my natural resourcefulness, I am starting to make headway. The one-man film studio era is here now.

    • @jmg9509
      @jmg9509 5 months ago +1

      I am right alongside you brother!

    • @kliersheed
      @kliersheed 5 months ago

      Can you give a short list of how you'd go about character consistency with free tools? I found it either extremely lacking (wasn't consistent) or behind paid models I couldn't try.
      Example:
      1. AI x: it's free and has no tokens; I use it to do this and that. Then step 2 is...
      2. AI x2: also free and has no tokens; now you can...

  • @UnmotivatedTechToober
    @UnmotivatedTechToober 5 months ago +13

    Part of the webui is a file (frpc_windows_amd64_v0.2) which is a reverse proxy utility. Looks extremely untrustworthy to me. Running under a virtual environment mitigates some of the risk but I'm still skeptical. You should really be running this in an extremely sandboxed operating system.

    • @kirtisozgur
      @kirtisozgur 5 months ago

      China + open source + free = I'm gonna steal your life now.

    • @xazobiolhstsilibourdistis407
      @xazobiolhstsilibourdistis407 5 months ago

      @@kirtisozgur yeah, pretty much

    • @theAIsearch
      @theAIsearch  5 months ago +1

      Thanks for sharing!

    • @shabadooshabadoo4918
      @shabadooshabadoo4918 5 months ago +2

      I also noticed when you add a video it says "uploading video" which seemed a little sus for a local install.

    • @IloveElsaofArendelle
      @IloveElsaofArendelle 5 months ago

      Never trust a Chinese company that wants your data

  • @PINKALIMBA
    @PINKALIMBA 5 months ago +8

    I ran into a problem when entering the line "conda activate LivePortrait". It returns "CondaError: Run 'conda init' before 'conda activate'". What should I do?

    • @Bandaniji24
      @Bandaniji24 5 months ago +6

      Type this: conda init
      It will ask you to close the cmd window.
      Then restart it and repeat the same instructions.
      After that, type: conda activate LivePortrait (see the command sketch below)
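      For reference, a minimal sketch of that sequence in cmd, assuming a default Miniconda install and the environment, requirements.txt, and app.py names used in the tutorial:

      :: one-time shell setup; close and reopen cmd afterwards
      conda init

      :: in the new cmd window, from the LivePortrait folder
      conda activate LivePortrait
      pip install -r requirements.txt
      python app.py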

    • @PINKALIMBA
      @PINKALIMBA 5 months ago +1

      @@Bandaniji24 Thank you! It worked. 🤝

    • @DUBSTalExP
      @DUBSTalExP 4 months ago

      Thank you!

  • @MikevomMars
    @MikevomMars 3 months ago

    I hope future versions of LivePortrait can do the entire body, or at least the upper part, including arms and hands. That'd be such a breakthrough in motion-capture technology!

  • @AlphaProto
    @AlphaProto 5 months ago +90

    I could fix the lip sync of the Teenage Mutant Ninja Turtles movie with this!

    • @eccentricballad9039
      @eccentricballad9039 5 months ago +5

      I can think of a hundred ways to use this creatively, but I've only got 4 GB of VRAM.

    • @GameOver-qk2ys
      @GameOver-qk2ys 5 months ago

      @@eccentricballad9039 Use Google Colab 👀 (rough sketch below)

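      A rough sketch of what that could look like in a Colab cell with a GPU runtime; the paths and flags below are assumptions based on the repo's README and may differ in the current version:

      !git clone https://github.com/KwaiVGI/LivePortrait
      %cd LivePortrait
      !pip install -r requirements.txt
      # download the pretrained weights into ./pretrained_weights first, per the repo README
      !python inference.py -s assets/examples/source/s6.jpg -d assets/examples/driving/d0.mp4
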
    • @matthewmcneill5320
      @matthewmcneill5320 5 months ago

      You mean the original 1990 movie? Never noticed

    • @injectionAI
      @injectionAI 4 months ago

      I want to replace young Jeff Bridges and CLU in Tron: Legacy!

    • @wecreateustv
      @wecreateustv 3 months ago

      heroic af!

  • @phantasiaentertainment2170
    @phantasiaentertainment2170 5 months ago +4

    That's crazy! I've wanted to create a YT channel for so long but didn't want to use my own voice or face. I can do it now :)

    • @monday304
      @monday304 5 months ago +2

      That's a great idea. Good luck to you and your channel! Did you need a Chinese phone number to run this app?

    • @theAIsearch
      @theAIsearch  5 months ago +5

      @monday304 LivePortrait doesn't require any number

  • @CraigUKgames
    @CraigUKgames 5 months ago +1

    At 3:58 you start talking about how you can use Live Portrait not just on stationary images but on moving videos too. Yet none of the examples at the end showed how to use it on videos. I have tried uploading videos, but they are not a supported format.
    What is going on?

  • @Anish-o6n
    @Anish-o6n 5 months ago +4

    I subscribed to you when you had 2K subscribers, and today you have 150K. Bro, you totally deserve it, and your content is worth much more than 150K. Soon you will reach 1M. Love you bro, from India ❤

  • @looooool3145
    @looooool3145 5 months ago +1

    It looks weird in motion, but if you pause the animation at any point, the expression looks good and natural.

  • @gregblank247
    @gregblank247 5 months ago +4

    But it looked completely unnatural to only move the head. The shoulders stayed perfectly still throughout.

  • @NoWay1969
    @NoWay1969 5 months ago +1

    The video of a "girl rotating" at 4:10 is a Vermeer painting that someone has used AI to animate.

  • @PINKALIMBA
    @PINKALIMBA 5 months ago +3

    I have a basic laptop without an NVIDIA graphics card; can I use this as well?

    • @wendigo53
      @wendigo53 2 months ago

      Maybe. The bulk of the computational work is on the server side.

  • @Jebu911
    @Jebu911 1 month ago +1

    Makes me cry that my oldest friend, my computer, can't run good stuff like this.

  • @parallaxworld
    @parallaxworld 5 months ago +6

    love your videos, keep up the good work :D

  • @BIPPITYYIPYIP
    @BIPPITYYIPYIP 5 months ago

    I've been using this since the beta, starting about 3 years ago. Their current version is proprietary, does so much more, and mimics real life in every way.

  • @icsecrets172
    @icsecrets172 5 months ago +27

    For the people who can create these kinds of applications in code, why don't they create an installer, an exe type?

    • @ihenrynl
      @ihenrynl 5 months ago +7

      Because exe is a Windows thingie; the rest of the world uses a real Unix operating system like Mac and Linux. ;)

    • @wormjuice7772
      @wormjuice7772 5 months ago +5

      Exactly this. I guess the people who can make this kind of stuff are just used to doing things the hard way.
      A zipped .exe on cloud storage would have made this a no-brainer.

    • @cesarsantos854
      @cesarsantos854 5 months ago +3

      Laziness.

    • @PeterParker-tu9id
      @PeterParker-tu9id 5 months ago

      There is this other program called ChatGPT. I'd never used code or done any programming before, but I installed Linux on an old computer, since it takes up 90% fewer resources than Windows, and I just tell ChatGPT what I want to do... copy command... paste command... and I've somehow built custom personal apps without knowing what I'm doing. If anything goes wrong, I just ask ChatGPT, or I copy and paste the terminal output and tell ChatGPT to translate it into English. If you have not yet experienced life apart from Windows, you will find that if you just jump out of that window and take a walk with the penguin into the rabbit hole, there is a whole other world down there, vast and beautiful, that is so free. If you take the plunge you will find the true meaning of freedom in the PC world, and eventually come to the conclusion that you never knew you had been locked up for so long behind that window that was preventing you from seeing what else is out there, and all you had to do was open it and not be afraid to jump out. lol

    • @bartlx
      @bartlx 5 months ago +3

      There's certainly money to be made by creating 'easy' installers for would-be popular applications like this that comprise a lot of dependencies. But it would be a lot of work and a totally different expertise from giving photos an animated face.

  • @AIShipped
    @AIShipped 5 months ago +2

    Very thorough tutorial and a very good project to cover!

  • @High-Tech-Geek
    @High-Tech-Geek 5 months ago +4

    I love that you walk through the installation. Thank you!

  • @rgerber
    @rgerber 4 months ago

    3:16 she is just absolutely incredible. This easily exceeds cartoon animations
    *edit* alright found her: rayray facedancing facial expressions

  • @yessinjarraya6076
    @yessinjarraya6076 5 months ago +78

    This is gonna be a nightmare soon enough ...

    • @nicktaylor5264
      @nicktaylor5264 5 months ago +11

      Yea - one of my favourite (and somewhat fringe) concepts is that "Everything comes true in the end" - because the context around it changes.
      My fav example of this is "primitive tribes in 1970s National Geographic Magazine" being afraid of cameras because they thought they could steal your soul.
      Well.
      Here we are. We are within days of there being a browser extension that, with a single click, can superimpose any photo into any porn video... take any YouTube video of anyone and turn it into a kind of voodoo doll or golem that can be made to perform any action imaginable, including ringing up your family, friends, enemies and doing such a perfect impersonation of you that it is actually more realistic than you are yourself.
      In a way, the Algos already have voodoo dolls of you... a lifetime of clicks and comments etc, rows in a database tied by a single user_id.
      I think "Text" was a massive massive revolution in what it meant to be human because it collapsed the time dimension. Memories became something that took zero energy to maintain. I think AI is a process towards collapsing some other dimension, although I've yet to figure out what it is, and I might of course be talking bollocks.

    • @killerx4123
      @killerx4123 5 months ago +9

      @@nicktaylor5264 I want what you're having.

    • @CPB4444
      @CPB4444 5 months ago +2

      @@nicktaylor5264 Beautifully written, I too need my overlord my one true leader, a god not to worship but to follow, the basilisk, one of our own making.

    • @rakinrahman890
      @rakinrahman890 5 months ago

      ​@@nicktaylor5264☠️☠️☠️

    • @rakinrahman890
      @rakinrahman890 5 months ago +1

      Sure, if you have no idea about tech. This AI is amazing and it's only gonna get better.

  • @smartduck904
    @smartduck904 5 months ago +1

    This is going to be so great for video editing. Instead of having to animate facial expressions for characters, we could just use this software.

  • @tomoki-v6o
    @tomoki-v6o 5 months ago +30

    People with autism are unable to read emotions from facial expressions like normal people. This technology can help them a lot: you can exaggerate or magnify facial expressions so they can understand you and communicate effectively.
    For example, a kid can understand whether his mom is mad or not.

    • @theAIsearch
      @theAIsearch  5 months ago +1

      very interesting use case. thanks for sharing!

    • @KryzysX
      @KryzysX 5 months ago +2

      I don't think it would be feasible to get it done in real time

    • @ronilevarez901
      @ronilevarez901 5 months ago +4

      "like normal people".
      It's been a while since a comment made me feel so "abnormal" 😐

    • @Leto2ndAtreides
      @Leto2ndAtreides 5 months ago +3

      It may be simpler to use AI to tell them what emotions someone has.

    • @hangry3102
      @hangry3102 5 months ago

      @@KryzysX We already have real time face trackers, real time deepfakes, and we are definitely not that far off from getting real time generative AI of this quality.

  • @rgerber
    @rgerber 4 months ago

    really impressed and highly amused by the face expression acting

  • @PredictAnythingSoftware
    @PredictAnythingSoftware 5 months ago +3

    How about using videos instead of images as the source file, just like the samples you show? Please show us how we can do that as well, thanks. Anyway, I have successfully installed this on my computer using a Python-only env. And you're right, it generates very fast, unlike other video generators such as Hallo, which I have installed as well.

    • @theAIsearch
      @theAIsearch  5 months ago +1

      Glad you got it to work. They will release the video feature soon: github.com/KwaiVGI/LivePortrait/issues/27

  • @HariWiguna
    @HariWiguna 5 months ago

    Thanks to your very detailed, patient, step-by-step instructions, I was able to generate my own live portraits.
    My results are not as perfect as the examples, but amazing nonetheless. Thank you! Thank you! Thank you!

  • @choppergirl
    @choppergirl 5 months ago +15

    Yep, this install process is an effn nightmare, and I have Windows 10, Python, and Linux Mint installed under Hyper-V.
    One thing is for sure: AI projects have the absolute least intelligent user interfaces and installation methods. It's almost laughable how universally bad and fragmented they all are, and how none of them talk to each other.
    Who would release software with this many environment dependencies? Once he got to having to edit a path to tell Windows where conda was... I was like, I've been down this road; editing the env path variable never works for me. This is fail.

    • @BT-vu2ek
      @BT-vu2ek 5 months ago +2

      I'm right there with ya. Someone needs to have a serious talk with these software devs about simple user interfaces and self installing programs. I'll get excited when the interface says "Drop target face here" , "Drop source video here" , and then a big red button that says "GO". Until then, "Gee-whiz, that's interesting."

    • @AlphaProto
      @AlphaProto 5 months ago +4

      I agree. I followed the instructions, but I hit a wall when my computer couldn't find git. Very cool program, but I will have to wait for it to be simplified.
      The problem is that this usually happens when someone monetizes it with subscription fees.

    • @nickiesnook
      @nickiesnook 5 months ago

      @@BT-vu2ek This would take the devs way too long to do.

    • @jtabox
      @jtabox 5 months ago +7

      For starters, the people creating those demos/projects aren't UI designers, nor do they create the demos for widespread or commercial use, or even create them with the end-user in mind at all. They create them as part of their scientific papers and studies. Meaning it's a somewhat beautified version of their messy lab experiments. The fact they then release those projects as open source for everyone to use is something extra that we should be grateful about, not act entitled and complain because they didn't make a super-easy, no-code UI version for the most braindead of users.
      Of course the installation complexity varies, but I wouldn't say _any_ of those AI projects' demos are especially difficult to install. The absolutely vast majority of such projects are in Python, so once you get the hang of it it's easier. Also having Python environment managers (Conda/Mamba, etc) and git preinstalled usually is half of the work required. Besides that, if it's something specific you need help with, just open an issue in github and ask for help, people usually will answer you, as long as your question isn't "please hold my hand throughout the whole install process".

    • @MikeyDunksMusic
      @MikeyDunksMusic 5 months ago +1

      Lol. Yeah, I love this stuff, but unfortunately, I'm like you. I sit out the first few months of new stuff now, waiting for some paid site to hopefully pick it up, and then I just give them my money. Lol.

  • @Kjxperience1
    @Kjxperience1 4 months ago

    THIS IS INSANE... All the wannabe Yahoo Boys from Nigeria say hi. Your work is so easy now haha

  • @leoalphaproductions8642
    @leoalphaproductions8642 5 months ago +4

    Why are they using Frank Tufano’s image? 😂

  • @SilverHand619
    @SilverHand619 3 months ago +1

    thanks man! you deserve much more subs and likes :]

  • @Azguilianify
    @Azguilianify 5 months ago +2

    Thanks! It's absolutely awesome as nodes in ComfyUI!

    • @theAIsearch
      @theAIsearch  5 months ago +1

      oh, is there a node for this already?

    • @Azguilianify
      @Azguilianify 5 months ago +1

      @@theAIsearch Yup ^^

  • @ArcanePath360
    @ArcanePath360 5 months ago +29

    The only catch is...
    Proceeds to list a thing that 99.99% of us won't be able to get around.

    • @sickvr7680
      @sickvr7680 5 months ago +2

      what is that thing??? o_O

    • @ArcanePath360
      @ArcanePath360 5 months ago

      @@sickvr7680 You have to have a Chinese phone number

    • @English_Lessons_Pre-Int_Interm
      @English_Lessons_Pre-Int_Interm 5 months ago

      @@sickvr7680 Pay for the subscription and the graphics card. That's enough to raise 5 children in Africa.

    • @fzigunov
      @fzigunov 5 months ago

      Stop being lazy!!!

    • @ArcanePath360
      @ArcanePath360 5 months ago

      @@fzigunov Lazy? I wouldn't know how to get a Chinese phone number, would you? And that's before jumping through the myriad hoops to get it installed.

  • @LoFiChillandBeatsVibe
    @LoFiChillandBeatsVibe 5 months ago +1

    Great demo! Curious what CPU/GPU/RAM configuration you ran this on?

    • @theAIsearch
      @theAIsearch  5 months ago

      Thanks. RTX 5000 Ada, 16 GB VRAM. The CPU is an Intel i7, but I don't think that matters.

  • @PuissantPeacock
    @PuissantPeacock 5 months ago +3

    I know a few people that I wish I could set their Target Lip Open Ratio to zero. Just sayin'.

  • @issa0013
    @issa0013 5 months ago +1

    Can you make a video on your hardware? That setup you have looks cool

  • @simplereport8040
    @simplereport8040 5 months ago +1

    Man, this seems insane. I love your findings. Will test it tonight!
    Just one thing: if you could add a timer or how long it took to process, that would be very much appreciated 🙏

    • @theAIsearch
      @theAIsearch  5 months ago +1

      Thanks. For a 10s video, it took maybe 1-2min to generate. Very quick compared to other tools

    • @simplereport8040
      @simplereport8040 5 months ago

      @@theAIsearch thank you very much! That’s waaay faster than I expected! 🤯

  • @Marcdaddy10
    @Marcdaddy10 5 months ago +73

    99% of people liking this will never be able to get it working, even if they try. They like it immediately after watching YouTube videos and never do anything with it.

    • @TheTruthIsGonnaHurt
      @TheTruthIsGonnaHurt 5 months ago +11

      I sadly agree with you. I think it's the way he presents the information. For example, he starts off saying do this quick thing, and then it turns into a multi-step process that should have been explained in another video. Saying something is quick, and then expanding into something that many people would feel is not quick, will ultimately turn people off.
      If you prep them ahead of time that it will be a daunting task, they will mentally be ready for it, or will watch the video when they have the appropriate amount of time.

    • @nietzchan
      @nietzchan 5 months ago

      I'm not even trying. The use case is still very limited; the driving video needs to be really clear with minimal shoulder movement. But getting it working is actually pretty 'simple' if you already know how to use ComfyUI and have used InsightFace before; at the bottom of the git page you can find the community resources you can use to get it working with ComfyUI.

    • @RobinHahnRN
      @RobinHahnRN 5 months ago

      I actually got this working in ComfyUI, but my results aren't that flash, unfortunately. Might have to try this in a venv... 😕

    • @biggerbitcoin5126
      @biggerbitcoin5126 5 months ago

      It's too technical imo, like this would take ages to dissect

    • @StratOCE
      @StratOCE 5 months ago +7

      @@biggerbitcoin5126 lmao he literally gave you a step by step, most technical things don't go anywhere near as far as he went to describe how to do it. Are you able to understand basic English? Do you have comprehension skills? This isn't even a technical vs non-technical issue at this point.

  • @heartshinemusic
    @heartshinemusic 5 months ago

    Wow, great stuff. This is the next step I was waiting for. All the puzzle pieces are falling together... moving characters and then making them talk or sing. It just all needs to come together in one platform to combine different features to create movies or music videos or virtual avatars. Thanks for sharing! EDIT: Can you make this work with 16:9 ratio images? I see a lot of lip-sync programs that are just square.

  • @AlexClarkcompany
    @AlexClarkcompany 5 months ago +56

    I heard several individuals at Salt Shack saying that the market is ripe, so I'm thinking about investing some money in stocks. Is it a good time to buy stock? I have almost $545K in equity from the sale of my property, but I'm unsure what to do with it. Should I buy shares now or wait for a better opportunity?

    • @LouisMorganxb3
      @LouisMorganxb3 5 months ago +1

      Of course, but you shouldn't enter the market blindly just because there are prospects there. I'll urge you to get professional assistance in order to comprehend the possible aspects that could contribute to your financial growth.

    • @OscarOwenn
      @OscarOwenn 5 months ago +1

      Many people underestimate the need of a financial advisor until they are burned by their own emotions. I recall that after a long divorce, I needed a good boost to keep my firm afloat, so I looked for licensed consultants and found someone with the highest qualifications. Despite inflation, she has helped me increase my reserve from $275k to $850k.

    • @AlexClarkcompany
      @AlexClarkcompany 5 months ago

      This is definitely considerable! think you could suggest any professional/advisors i can get on the phone with? I'm in dire need of proper portfolio allocation.

    • @OscarOwenn
      @OscarOwenn 5 months ago

      My CFA ’Leah Foster Alderman’, a renowned figure in her line of work. I recommend researching her credentials further. She has many years of experience and is a valuable resource for anyone looking to navigate the financial market.

    • @AlexClarkcompany
      @AlexClarkcompany 5 months ago

      I just googled her and I'm really impressed with her credentials, I reached out to her since I need all the assistance I can get. I just scheduled a caII.

  • @adeusexmachina
    @adeusexmachina 5 months ago

    You guide like a god. Thanks for the instructions.

  • @gato9484
    @gato9484 5 months ago +3

    uwu

  • @jencodeit
    @jencodeit 5 months ago

    Amazing feature 🤩 Greatly appreciate you doing this super simple video guide on how to use the tool. Game changer! Thanks so muchhhh

  • @ParvathyKapoor
    @ParvathyKapoor 5 months ago

    Available in Pinokio?

  • @patnor7354
    @patnor7354 5 months ago +1

    This is awesome. AI is really advancing fast.

  • @wesleyworkman2372
    @wesleyworkman2372 5 months ago

    The cat chopping chicken was awesome!

  • @pgfrank2351
    @pgfrank2351 4 months ago +2

    Everything was going well until I tried pasting in the Gradio interface; I get "No module named 'torch'". Please help.
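    In case it helps, a hedged sketch of the usual fix for that error, assuming the LivePortrait conda environment and requirements.txt from this tutorial; it typically means the dependencies were installed into a different environment than the one the app is launched from:

    conda activate LivePortrait
    pip install -r requirements.txt
    python app.py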

  • @edmundleung2098
    @edmundleung2098 3 months ago

    Wah cool!!!! Can you do Xi?

  • @JohnSundayBigChin
    @JohnSundayBigChin 4 months ago

    Thanks for the tutorial... I was finally able to install Miniconda without any problems.

  • @kyrolazioko4783
    @kyrolazioko4783 4 months ago

    I followed the steps exactly, and all was fine until the 12:13 mark. CMD still told me "'conda' is not recognized as an internal or external command, operable program or batch file."
    Opening CMD and running conda --version showed that it was installed, though.

    • @theAIsearch
      @theAIsearch  4 months ago

      Open a new cmd window and try again.

  • @paralucent3653
    @paralucent3653 5 months ago

    It works very well when the source photo and input video are at the same angle, but there is obvious warping when the angle is different. Best to keep the angles the same.

  • @sleepy_dobe
    @sleepy_dobe 5 months ago

    That's it, I'm never ever gonna believe what I see online or on digital media anymore.

  • @High-Tech-Geek
    @High-Tech-Geek 5 months ago

    3:15 the images don't follow the "driving video" eyes at all. It cannot reproduce cross-eyed or side eyes at all.

  • @faizankhan6045
    @faizankhan6045 4 months ago +1

    Bro, I think you should also do basic tutorials about Python, pip requirements installation, Anaconda, git, and the basics of using AI locally, so people can at least solve their errors while running it. On your PC everything is already installed; here, after pip install requirements, it says torch isn't available.

  • @unknownuser3000
    @unknownuser3000 4 months ago

    FaceFusion is the best choice since it's a Stable Diffusion extension.

  • @Nakul-f1rq
    @Nakul-f1rq 5 months ago +1

    Best channel 🔥🔥🔥

  • @apolloixixix
    @apolloixixix 2 months ago

    This looks awesome. Would you say it works better installed locally? Also, if you install it locally, do you still have to pay for a membership, or is there a one-time fee to buy it forever? lol

    • @theAIsearch
      @theAIsearch  2 months ago +1

      It depends on your hardware. If you have a good CUDA GPU, it runs great. If not, it's best to run it online via Hugging Face or other platforms. It's free to install locally.

    • @apolloixixix
      @apolloixixix 2 months ago

      @@theAIsearch thank you!

  • @Crisisdarkness
    @Crisisdarkness 5 months ago

    Wow, you always present great advances in AI and open source. I'm very grateful for your channel; I will try this soon.

  • @thethaovatoquoc312
    @thethaovatoquoc312 24 days ago

    Great tutorial! Can the driving video be longer than a few seconds, like 5 minutes? Thank you.

  • @DamonCzanik
    @DamonCzanik 5 months ago

    I literally thought, "This would be amazing if it could only do animals too", and 30 seconds later.... he shows it doing animals too. Can't wait to try this one out.

  • @TomiTom1234
    @TomiTom1234 5 months ago +2

    I tried to upload a video instead of a picture, but it doesn't allow any extension other than photo extensions. But you showed in your video that it is also possible to add a video 🤔

    • @PredictAnythingSoftware
      @PredictAnythingSoftware 5 months ago

      I hope someone else will show us how to do it. I want to know how to do that as well.

    • @theAIsearch
      @theAIsearch  5 months ago +1

      They will release the video feature 'in a few days': github.com/KwaiVGI/LivePortrait/issues/27

  • @nathanrunda6053
    @nathanrunda6053 5 months ago +2

    What is the output resolution of these videos?

  • @davimak4671
    @davimak4671 5 months ago

    Bro, a good follow-up would be to explain how to use it vid-to-vid. Can you explain this? I've seen examples of guys doing vid-to-vid using LivePortrait and it's awesome.

  • @alexeyled4680
    @alexeyled4680 3 months ago +1

    And if I need to transfer a facial expression not to a video but to a photo, how can I do this?

  • @AdvantestInc
    @AdvantestInc 5 months ago

    Amazing demo! What are the potential security implications of using AI deepfake technology like Live Portrait? Are there measures in place to prevent misuse?

  • @sosameta
    @sosameta 5 months ago +1

    insane use cases are coming

  • @BeePositive-h5u
    @BeePositive-h5u 1 month ago +1

    How can I run LivePortrait if I'm using an AMD GPU?

  • @francom121ify
    @francom121ify 5 months ago +1

    Great video! Just wondering how you use a video as the source and a video as the driving input; I only see an image option for the source. Let us know if you can. Thanks!

    • @theAIsearch
      @theAIsearch  5 months ago +2

      it will be released soon: github.com/KwaiVGI/LivePortrait/issues/27

  • @sird135
    @sird135 5 months ago

    The numa numa songs are gonna upgrade with this

  • @DisciplineLifeStyleR
    @DisciplineLifeStyleR 2 months ago +1

    Does it work only with a GPU? Because torch gives me some problems with the installation.

  • @olekanuriel9359
    @olekanuriel9359 2 months ago

    Can this be integrated into Blender to animate 3D faces? I imagine it would be easier for it to read 3D faces than 2D, right?

  • @adilsiddiqui7207
    @adilsiddiqui7207 5 months ago

    Just a question: if the source video is already talking, and you then use a driving video full of talking and expressions as well, what would the outcome be?
    Will it mask the driving video's talking over the source video's talking?
    And thanks for the video and information 👍👍👍

    • @theAIsearch
      @theAIsearch  5 months ago

      Good question. They haven't released the video feature yet, but that's a good thing to test out.

  • @lamanchatecno9684
    @lamanchatecno9684 5 months ago

    Thank you. Excellent video, very detailed and well explained. New subscriber.

  • @ai-bokki
    @ai-bokki 5 months ago

    TurboType is great! I was looking for something like that.
    Btw, your input files are very small. How long does it take to render? Can we process a 1080p video that's 30 seconds long? The input would be around 100 MB.

  • @kukukachu
    @kukukachu 5 months ago +1

    Dell and Nvidia, huh? Was one of those the Chinese friend that gave you the code? :D

  • @infinit854
    @infinit854 29 days ago +1

    Can it run in realtime with a webcam feed?

  • @DanMalandragem
    @DanMalandragem 5 months ago

    How do I create something like an exe file to just open it later? I could use it the first time, but once I closed the tab and cmd I wasn't able to open it again...

    • @joseavilasg
      @joseavilasg 5 months ago

      You could create a .bat instead and run the commands there (rough sketch below).
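      A minimal sketch of such a .bat launcher, assuming a default per-user Miniconda install and the environment name from the video; the folder path is a placeholder to edit:

      @echo off
      :: placeholder paths - change them to match your install
      call %USERPROFILE%\miniconda3\Scripts\activate.bat LivePortrait
      cd /d C:\path\to\LivePortrait
      python app.py
      pause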

    • @nnkaz1k856
      @nnkaz1k856 5 months ago

      Maybe auto-py-to-exe

  • @Puppetgate
    @Puppetgate 4 months ago +1

    Any idea why I am getting this error? Everything else worked up until this point:
    C:\Users\funky\Desktop\liveportrait\LivePortrait>conda create -n LivePortrait python==3.9
    'conda' is not recognized as an internal or external command,
    operable program or batch file.

    • @hernanhidalgo8246
      @hernanhidalgo8246 4 months ago

      I have the same issue

    • @goblinaiz
      @goblinaiz 4 months ago

      same issue here

    • @acethemaker-aceforce2
      @acethemaker-aceforce2 4 months ago

      Check the conda version of Python and install it. 'conda' not being recognized means Windows can't find the program needed to run the app.

    • @MooseKnuckleWarrior
      @MooseKnuckleWarrior 3 months ago

      Add .18 to the end:
      conda create -n LivePortrait python==3.9.18
      I almost missed it myself, but it's in the text at the bottom of the screen, correcting the command from GitHub.

    • @MoonLiteNite
      @MoonLiteNite 3 months ago

      You didn't set your env PATH correctly. You can fix that, or manually go to the conda path before issuing the commands (sketch below).
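      If it helps, a sketch of both workarounds, assuming a default per-user Miniconda location (adjust the path if you installed it elsewhere):

      :: option 1: put conda on PATH for the current cmd session only
      set PATH=%USERPROFILE%\miniconda3;%USERPROFILE%\miniconda3\Scripts;%PATH%
      conda create -n LivePortrait python==3.9.18

      :: option 2: skip PATH edits entirely and open the "Anaconda Prompt (miniconda3)"
      :: shortcut from the Start menu, which already has conda available, then run
      :: the same commands from the repo folder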

  • @1edber
    @1edber 5 months ago

    How do you do the sample you showed of a video transposed onto a video source (4:17)? Thanks!

    • @theAIsearch
      @theAIsearch  5 months ago +1

      They will release it soon: github.com/KwaiVGI/LivePortrait/issues/27

    • @1edber
      @1edber 5 months ago

      @@theAIsearch Cool, thanks for the reply! :D

  • @MrDanINSANE
      @MrDanINSANE 5 months ago +2

    Very cool, thank you for sharing ❤
    Too bad it won't work on VIDEOS or Multiple Faces like in their examples.

  • @Jockerai
    @Jockerai 3 months ago

    I can't solve the "CUDA is not compatible with Torch" error! Which version is compatible with which? I have CUDA 12.1.
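    For the error above, a hedged sketch of the usual approach: reinstall torch from the wheel index that matches your CUDA toolkit (cu121 for CUDA 12.1), inside the activated environment. The exact torch version LivePortrait expects may be pinned differently in its requirements.txt:

    pip uninstall -y torch torchvision torchaudio
    pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"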

  • @jantube358
    @jantube358 4 months ago

    Can you do it in real time? There is a cam icon under the video input. What happens when the face turns away from the camera?

  • @thays182
    @thays182 5 months ago

    Amazing walkthrough! Thank you! I'm having an issue where the output video bobbles around and shakes a bit. I'm not seeing this in your examples. Any ideas on why or how a shake is happening on the output?

  • @saffetkucuk3195
    @saffetkucuk3195 2 months ago

    Hello, dear. Thank you for sharing. I got this error and couldn't figure it out. Could you help me? Thanks in advance: "The requested GPU duration (240s) is larger than the maximum allowed retry in -1 day, 23:59:59"

  • @AlejandroGuerrero
    @AlejandroGuerrero 5 months ago

    This tutorial was great. Thanks. Question: is there any tool (like this, to run locally) to upload an mp3 voiceover and generate the mouth and eye movements to later use in this process? Thanks!

    • @theAIsearch
      @theAIsearch  5 months ago

      Yes, is this what you're looking for? ruclips.net/video/rlnjcRP4oVc/видео.html

  • @davidblenkinsopp9597
    @davidblenkinsopp9597 3 months ago

    What is the minimum graphics specification? I keep getting a delay when trying to do voice sync.